6 research outputs found

    A Supervised Machine Learning Based Intrusion Detection Model for Detecting Cyber-Attacks Against Computer System

    Internet usage has become essential for communication in almost every profession in our digital age. To protect a network, an effective intrusion detection system (IDS) is vital: a software application that detects network intrusions using machine learning algorithms. Machine learning approaches reduce reliance on expert knowledge, since knowledge is extracted directly from the data, and they can exploit all the features of a packet traversing the network, unlike earlier intrusion-detection methods such as statistical models and secure-system approaches. Machine learning has thus become a fundamental technology for cyber security. Two of the key attack types that plague businesses, and the focus of this paper, are Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks; denial of service is also one of the most disastrous attacks on the Internet of Things (IoT). This work proposes supervised machine learning techniques for detecting them. In particular, it applies a regression algorithm, commonly used in data science and machine learning for prediction, and mines application-specific logs as an innovative detection approach. Cyber security gives customers the peace of mind of knowing that their information and money are secure.
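The abstract describes a supervised, regression-based classifier over packet features. A minimal sketch of that idea, using logistic regression trained by gradient descent on synthetic traffic features (the two features and their value ranges are illustrative assumptions, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features: [packets_per_second, payload_entropy].
# Normal traffic: moderate rate, high entropy; DoS-like: high rate, low entropy.
normal = rng.normal(loc=[50.0, 6.0], scale=[10.0, 0.5], size=(200, 2))
dos = rng.normal(loc=[500.0, 2.0], scale=[50.0, 0.5], size=(200, 2))
X = np.vstack([normal, dos])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = attack

# Standardize so gradient descent behaves well.
X = (X - X.mean(axis=0)) / X.std(axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Batch gradient descent on the logistic loss.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

pred = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

On such well-separated synthetic classes the classifier reaches near-perfect training accuracy; real traffic would of course require richer features and held-out evaluation.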

    Hardware-In-the-loop simulation platform for the design, testing and validation of autonomous control system for unmanned underwater vehicle

    No full text
    Significant advances in various relevant science and engineering disciplines have propelled the development of more advanced, yet reliable and practical underwater vehicles. A great array of vehicle types and applications has been produced, along with a wide range of innovative approaches for enhancing the performance of unmanned underwater vehicles (UUVs). These recent advances enable the extension of UUVs' flight envelope to be comparable to that of manned vehicles. For undertaking longer missions, more advanced control and navigation will therefore be required to maintain an accurate position over a larger operational envelope, particularly in close proximity to obstacles (such as manned vehicles, pipelines, and underwater structures). In this case, a sufficiently good model is a prerequisite of control system design, and system evaluation and testing of unmanned underwater vehicles in certain environments can be tedious, time consuming, and expensive. This paper focuses on developing a dynamic model of a UUV for the purpose of guidance and control. Along with this, a novel HILS (Hardware-In-the-Loop Simulation) framework for rapid construction of testing scenarios with embedded systems has been investigated. The modeling approach is implemented for the AUV Squid, an autonomous underwater vehicle that was designed, developed, and tested by a research team at the Center for Unmanned System Studies at Institut Teknologi Bandung.
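A HILS loop steps a vehicle dynamic model in real time against the embedded controller. A hedged sketch of the simplest such model, 1-DOF surge dynamics m·du/dt = T − d·u·|u| (thrust minus quadratic drag) integrated with Euler steps; the mass, drag, and thrust values are illustrative assumptions, not the AUV Squid's identified parameters:

```python
m = 60.0    # vehicle mass + added mass [kg], assumed
d = 25.0    # quadratic drag coefficient [kg/m], assumed
T = 40.0    # constant thrust [N], assumed
dt = 0.01   # simulation step [s]

u = 0.0     # surge velocity [m/s]
for _ in range(20000):           # 200 s of simulated time
    du = (T - d * u * abs(u)) / m  # acceleration from force balance
    u += du * dt                   # forward-Euler integration step

# At steady state thrust balances drag: u_ss = sqrt(T / d).
u_ss = (T / d) ** 0.5
print(f"final velocity {u:.3f} m/s, analytic steady state {u_ss:.3f} m/s")
```

In an actual HILS setup this integration step would run at a fixed real-time rate, with thrust T read from the controller hardware each tick instead of held constant.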

    Cluster-Factors of Mobile Sensor Network Technology for Security Enhanced PEGASIS

    No full text
    Mobile wireless sensor networks (MWSNs) have been a hot topic of research, and numerous routing methods have been developed to increase energy efficiency and extend network lifetime. Nodes close to the sink often use more energy to relay data from their neighbors to the sink, which causes them to run out of energy faster. These places are also referred to as rendezvous points, and choosing the best one is a hard task. The likelihood of choosing an ideal node as the rendezvous point is extremely low because hierarchical algorithms only use local information to select these places. The hot spot problem is addressed from four angles in this work using the Enhanced Power-Efficient Gathering in Sensor Information Systems (EPEGASIS) technique. First, the optimal communication distance is calculated to limit the amount of energy used during transmission. To balance the energy consumption among the nodes, mobile sink technology is employed, and a threshold value is set to safeguard the dying nodes. The node can then adjust its communication range based on its distance from the sink node. Thorough testing has been done to demonstrate that the proposed EPEGASIS performs better in terms of network lifetime, energy consumption, and network latency.
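The "optimal communication distance" idea can be sketched with the common first-order radio model: relaying a bit over hops of length d costs roughly (2·E_elec + eps·d^n) per hop, i.e. (2·E_elec + eps·d^n)/d per meter, which is minimized at d_opt = (2·E_elec / (eps·(n−1)))^(1/n). The radio constants below are the usual textbook values, assumed here for illustration; they are not taken from this paper:

```python
E_elec = 50e-9   # J/bit, transceiver electronics energy (assumed)
eps = 10e-12     # J/bit/m^2, free-space amplifier energy (assumed)
n = 2            # path-loss exponent (free-space model)

# Closed-form minimizer of per-meter relay energy.
d_opt = (2 * E_elec / (eps * (n - 1))) ** (1 / n)

def energy_per_meter(d):
    """Energy to move one bit one meter using hops of length d."""
    return (2 * E_elec + eps * d ** n) / d

print(f"d_opt = {d_opt:.1f} m")
```

With these constants d_opt comes out to 100 m: shorter hops waste electronics energy on extra relays, while longer hops pay superlinearly in amplifier energy.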

    Multiagent Reinforcement Learning Based on Fusion-Multiactor-Attention-Critic for Multiple-Unmanned-Aerial-Vehicle Navigation Control

    No full text
    The proliferation of unmanned aerial vehicles (UAVs) has spawned a variety of intelligent services, where efficient coordination plays a significant role in increasing the effectiveness of cooperative execution. However, due to the limited operational time and range of UAVs, achieving highly efficient coordinated actions is difficult, particularly in unknown dynamic environments. This paper proposes a multiagent deep reinforcement learning (MADRL)-based fusion-multiactor-attention-critic (F-MAAC) model for multiple UAVs' energy-efficient cooperative navigation control. The proposed model is built on the multiactor-attention-critic (MAAC) model and offers two significant advances. The first is a sensor fusion layer, which enables the actor network to utilize all required sensor information effectively. The second is a layer that computes the dissimilarity weights of different agents, added to compensate for the information lost through the attention layer of the MAAC model. We utilize the UAV LDS (logistic delivery service) environment created by the Unity engine to train the proposed model and verify its energy efficiency; a feature that measures the total distance traveled by the UAVs is incorporated into the UAV LDS environment for this purpose. To demonstrate the performance of the proposed model, the F-MAAC model is compared with several conventional reinforcement learning models in two use cases. First, we compare the F-MAAC model to the DDPG, MADDPG, and MAAC models based on the mean episode rewards over 20k training episodes. The two top-performing models (F-MAAC and MAAC) are then chosen and retrained for 150k episodes. Our study measures the number of deliveries completed within the same period and within the same distance to represent energy efficiency. According to our simulation results, the F-MAAC model outperforms the MAAC model, making 38% more deliveries in 3000 time steps and 30% more deliveries per 1000 m of distance traveled.
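The attention step a MAAC-style critic applies can be sketched as scaled dot-product attention: each agent queries the encoded observations/actions of the other agents and mixes their values with softmax weights. The dimensions, random encodings, and single attention head below are illustrative assumptions, not the F-MAAC architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, dim = 4, 8

e = rng.normal(size=(n_agents, dim))   # per-agent encodings (stand-ins)
W_q = rng.normal(size=(dim, dim))      # query projection
W_k = rng.normal(size=(dim, dim))      # key projection
W_v = rng.normal(size=(dim, dim))      # value projection

def softmax(x):
    x = x - x.max()                    # numerical stability
    ex = np.exp(x)
    return ex / ex.sum()

def attend(i):
    """Attention of agent i over all other agents j != i."""
    q = e[i] @ W_q
    others = [j for j in range(n_agents) if j != i]
    scores = np.array([q @ (e[j] @ W_k) / np.sqrt(dim) for j in others])
    weights = softmax(scores)
    value = sum(w * (e[j] @ W_v) for w, j in zip(weights, others))
    return weights, value

weights, value = attend(0)
print("attention weights:", np.round(weights, 3))
```

The dissimilarity-weight layer the abstract describes would add a second weighting of the other agents alongside these softmax weights, so information down-weighted by attention is not lost entirely.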
